
    A neural marker for social bias toward in-group accents

    Accents provide information about a speaker's geographical, socio-economic, and ethnic background. Research in applied psychology and sociolinguistics suggests that we generally prefer our own accent to other varieties of our native language and attribute more positive traits to it. Despite the widespread influence of accents on social interactions and on educational and work settings, the neural underpinnings of this social bias toward our own accent, and what may drive it, are unexplored. We measured brain activity while participants from two different geographical backgrounds listened passively to three English accent types embedded in an adaptation design. Cerebral activity in several regions, including the bilateral amygdalae, revealed a significant interaction between the participants' own accent and the accent they listened to: repetition of the participants' own accent elicited an enhanced neural response, whereas repetition of the other group's accent produced the reduced responses classically associated with adaptation. Our findings suggest that increased social relevance of, or greater emotional sensitivity to, in-group accents may underlie the own-accent bias. Our results provide a neural marker for the bias associated with accents and show, for the first time, that the neural response to speech is partly shaped by the geographical background of the listener.
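    The adaptation logic described above can be sketched numerically: the repetition effect is simply the mean response to a repeated stimulus minus the mean response to its first presentation, with suppression (adaptation) for the out-group accent and enhancement for the in-group accent. The numbers below are invented for illustration; nothing here reproduces the study's data.

    ```python
    import numpy as np

    def repetition_effect(first, repeated):
        """Mean repeated-minus-first response difference.
        Negative values indicate adaptation (suppression),
        positive values indicate response enhancement."""
        return float(np.mean(repeated) - np.mean(first))

    rng = np.random.default_rng(0)
    # Hypothetical per-trial responses in arbitrary units
    own_first   = rng.normal(1.0, 0.1, 20)
    own_rep     = rng.normal(1.3, 0.1, 20)   # enhancement for own accent
    other_first = rng.normal(1.0, 0.1, 20)
    other_rep   = rng.normal(0.7, 0.1, 20)   # classical adaptation

    print(repetition_effect(own_first, own_rep) > 0)      # in-group: enhanced
    print(repetition_effect(other_first, other_rep) < 0)  # out-group: suppressed
    ```

    The interaction reported in the abstract corresponds to these two effects having opposite signs depending on the listener's own accent.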

    Effects of emotional valence and arousal on the voice perception network

    Several theories conceptualise emotions along two main dimensions: valence (a continuum from negative to positive) and arousal (a continuum from low to high). These dimensions are typically treated as independent in many neuroimaging experiments, yet recent behavioural findings suggest that they are actually interdependent. This interdependence has implications for neuroimaging design, analysis, and theoretical development. We were interested in determining the extent of this interdependence both behaviourally and neuroanatomically, as well as in teasing apart any activation specific to each dimension. While we found extensive overlap in activation for each dimension in traditional emotion areas (bilateral insulae, orbitofrontal cortex, amygdalae), we also found activation specific to each dimension, with characteristic relationships between modulations of these dimensions and BOLD signal change. Increases in arousal ratings were related to increased activations predominantly in voice-sensitive cortices after variance explained by valence had been removed. In contrast, emotions of extreme valence were related to increased activations in bilateral voice-sensitive cortices, hippocampi, anterior and mid-cingulum, and medial orbito- and superior frontal regions after variance explained by arousal had been accounted for. Our results therefore do not support a complete segregation of the brain structures underpinning the processing of affective dimensions.
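    "After variance explained by valence had been removed" is a standard residualization step: regress the signal on one dimension, then test the residuals against the other. A minimal sketch with simulated data (all variables and coefficients invented, not taken from the study):

    ```python
    import numpy as np

    def residualize(y, x):
        """Remove the least-squares fit of x (plus intercept) from y."""
        X = np.column_stack([np.ones_like(x), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return y - X @ beta

    rng = np.random.default_rng(1)
    valence = rng.uniform(-1, 1, 100)
    arousal = 0.5 * valence + rng.uniform(0, 1, 100)   # interdependent dimensions
    bold = 0.8 * arousal + 0.4 * valence + rng.normal(0, 0.1, 100)

    # Residuals are orthogonal to valence by construction; any remaining
    # correlation with arousal is variance uniquely attributable to arousal.
    resid = residualize(bold, valence)
    r = np.corrcoef(resid, arousal)[0, 1]
    print(r > 0.5)
    ```

    Because the two dimensions are correlated, the order of residualization matters, which is why the abstract reports each dimension's effect after partialling out the other.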

    Effects of sexually dimorphic shape cues on neurophysiological correlates of women’s face processing

    Sexual dimorphism (i.e., masculinity in males and femininity in females) is known to affect social perceptions that are important for both mate choice and intrasexual competition, such as attractiveness and dominance. Little is known, however, about the neurophysiological underpinnings mediating sexual dimorphism's effects on face processing. Here we investigate the neurophysiological correlates of processing sexually dimorphic faces using event-related potentials (ERPs). We employed image transformation techniques to enhance and reduce the sexually dimorphic shape features of male and female faces viewed by women performing a sex categorization task. Sexual dimorphism modulated superior-central N250 magnitude and the peak latency of the N170 and P200. The sex of the face further modulated the amplitude of the P200. These findings extend prior work linking the superior-central N250 to social categorization processes triggered by face shape, and strengthen its functional interpretation in terms of coarse- versus fine-grained categorical judgements. We conclude that ERPs can illuminate the cognitive mechanisms (i.e., mental processes) underlying behavioral responses to sexual dimorphism.

    Neuromodulation of Right Auditory Cortex Selectively Increases Activation in Speech-Related Brain Areas in Brainstem Auditory Agnosia

    Auditory agnosia is an inability to make sense of sound that cannot be explained by deficits in low-level hearing. In view of recent promising results in the neurorehabilitation of language disorders after stroke, we examined the effect of transcranial direct current stimulation (tDCS) in a young woman with general auditory agnosia caused by traumatic injury to the left inferior colliculus. Specifically, we studied activations to sound embedded in a block design using functional magnetic resonance imaging before and after application of anodal tDCS to the right auditory cortex. Before tDCS, auditory discrimination deficits were associated with abnormally reduced activations of the auditory cortex and bilateral unresponsiveness of the anterior superior temporal sulci and gyri. This session replicated a previous functional scan with the same paradigm a year before the current experiment. We then applied anodal tDCS over right auditory cortex for 20 minutes and immediately re-scanned the patient. We found increased activation of bilateral auditory cortices and, for speech sounds, selectively increased activation in Broca's and Wernicke's areas. Future research might consider the long-term behavioral effects of neurostimulation in auditory agnosia and its potential use in the neurorehabilitation of more general auditory disorders.

    Adaptation to vocal expressions and phonemes is intact in autism spectrum disorder

    Several recent studies have demonstrated reduced visual aftereffects, particularly to social stimuli, in autism spectrum disorder (ASD). This putative impairment of the adaptive mechanism in ASD has been put forward as a possible explanation for some of the core social problems experienced by children with ASD (e.g., in facial emotion or identity recognition). We addressed this claim in children with ASD and typically developing children by using an established methodology and a morphed auditory stimulus set for eliciting robust aftereffects to vocal expressions and phonemes. Although children with ASD were significantly worse at categorizing the vocal expressions compared with the control stimuli (phoneme categorization), aftereffect sizes in both tasks were identical in the two participant groups. Our finding suggests that the adaptation mechanism is not universally impaired in ASD and is therefore not an explanation for the social perception difficulties in ASD.
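    An aftereffect on a morphed continuum is typically quantified as the shift of the 50% category boundary after adaptation. A minimal sketch of that measurement, with invented response proportions (not the study's data):

    ```python
    import numpy as np

    def boundary(morph_levels, p_category_b):
        """Morph level where P('category B') crosses 0.5,
        estimated by linear interpolation (p must be increasing)."""
        return float(np.interp(0.5, p_category_b, morph_levels))

    levels = np.linspace(0, 1, 7)  # morph steps from category A to category B
    # Hypothetical proportions of 'B' responses at each morph step
    pre  = np.array([0.02, 0.05, 0.20, 0.50, 0.80, 0.95, 0.98])
    post = np.array([0.10, 0.30, 0.60, 0.85, 0.95, 0.98, 0.99])  # after adapting to A

    # Adapting to A makes ambiguous stimuli sound more like B, shifting
    # the boundary toward the adaptor; the shift is the aftereffect size.
    aftereffect = boundary(levels, pre) - boundary(levels, post)
    print(aftereffect > 0)
    ```

    Comparing this shift between groups, rather than raw categorization accuracy, is what lets the study separate adaptation strength from task difficulty.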

    Norm-based coding of voice identity in human auditory cortex

    Listeners exploit small interindividual variations around a generic acoustical structure to discriminate and identify individuals from their voice—a key requirement for social interactions. The human brain contains temporal voice areas (TVA) [1] involved in an acoustic-based representation of voice identity [2–6], but the underlying coding mechanisms remain unknown. Indirect evidence suggests that identity representation in these areas could rely on a norm-based coding mechanism [4, 7–11]. Here, we show using fMRI that voice identity is coded in the TVA as a function of acoustical distance to two internal voice prototypes (one male, one female)—approximated here by averaging a large number of same-gender voices using morphing [12]. Voices more distant from their prototype are perceived as more distinctive and elicit greater neuronal activity in voice-sensitive cortex than closer voices—a phenomenon not merely explained by neuronal adaptation [13, 14]. Moreover, explicit manipulations of distance-to-mean by morphing voices toward (or away from) their prototype elicit reduced (or enhanced) neuronal activity. These results indicate that voice-sensitive cortex integrates relevant acoustical features into a complex representation referenced to idealized male and female voice prototypes. More generally, they shed light on remarkable similarities in the cerebral representations of facial and vocal identity.
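    The norm-based coding idea reduces to a simple computation: represent each voice by its distance from the average (prototype) voice, and predict a stronger response for voices farther from that norm. A toy sketch with invented acoustic features (nothing here comes from the study's stimuli):

    ```python
    import numpy as np

    def prototype(voices):
        """Average voice: the internal norm, approximated in the study
        by morphing together a large number of same-gender voices."""
        return np.mean(voices, axis=0)

    def distance_to_norm(voice, norm):
        """Euclidean distance in the (hypothetical) acoustic feature space."""
        return float(np.linalg.norm(voice - norm))

    rng = np.random.default_rng(2)
    male_voices = rng.normal(0, 1, (50, 4))  # 50 voices, 4 acoustic features
    norm = prototype(male_voices)

    typical     = norm + 0.1 * rng.normal(size=4)  # morphed toward the prototype
    distinctive = norm + 2.0 * rng.normal(size=4)  # morphed away from it

    # Norm-based prediction: the distinctive voice lies farther from the
    # norm and should therefore drive a stronger neural response.
    print(distance_to_norm(distinctive, norm) > distance_to_norm(typical, norm))
    ```

    The morphing manipulation in the abstract corresponds to scaling a voice's offset from the norm up or down, which under this scheme directly predicts enhanced or reduced activity.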